Video Editing

Video File Formats and "Cinema" vs "Video"

MPEGs, AVIs and Web Formats

File Formats

AVI (Audio Video Interleave) files are the native Windows movie file format, and in their original form they are uncompressed. The first video files placed on the web were versions of AVI files. Depending on the specifications an AVI file can be very small, but most are very large: too large for the web and too large for CDs. Various compressed formats were devised in response, such as MPEG.

MPEG (the name comes from the Moving Picture Experts Group) is among the most compressed of formats. DVDs use various types of MPEG compression. The file extension is typically ".mpg".

But MPEGs are still normally too large for easy web delivery. Apple's QuickTime was the first format designed with the internet in mind; RealMedia was next, and then Windows Media. QuickTime, RealMedia and Windows Media all offer various bandwidth sizes. A number of methods are used to save movies at various bandwidths: the number of frames per second is reduced, the width and height of the frame are reduced, and image-compression methods are applied within each frame.
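To see how those three levers interact, here is a rough back-of-the-envelope sketch in Python. The resolutions, frame rates, and compression ratios are illustrative assumptions only, not any particular codec's real numbers.

```python
# Rough sketch (not any codec's real math): how frame rate, frame size, and
# per-image compression combine into a delivery data rate.

def data_rate_mbps(width, height, fps, compression_ratio, bits_per_pixel=24):
    """Approximate video data rate in megabits per second."""
    raw_bits_per_second = width * height * bits_per_pixel * fps
    return raw_bits_per_second / compression_ratio / 1_000_000

# Full-size, full-rate video with modest compression:
print(data_rate_mbps(640, 480, 30, compression_ratio=20))   # about 11 Mbps

# Shrink the frame, halve the frame rate, compress harder (early web sizes):
print(data_rate_mbps(320, 240, 15, compression_ratio=60))   # about 0.5 Mbps
```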

Memory Card and Hard Drive Files

All new cameras now record to files directly on a hard drive or memory card, or are quickly heading that way. That means we still need a way to read old tapes, so "legacy" cameras will be needed. If you have a tape library, it is already a good idea to purchase cameras which can read those tapes.


Video Tape Formats

The first practical video tape recorder was introduced in 1956 by Ampex after more than four years of work. Charles Ginsburg was the project leader for that team. The videotape format was called Quadruplex and used a 2-inch wide tape. Ampex also designed a video tape recorder for consumers in 1963 (VR 1500/600).

A dizzying array of video formats has been introduced in the years since 1956. Those first recorders were reel-to-reel machines; cassettes later enclosed both reels in a single holder (the cassette shell). The life of those first recording heads was only 100 hours. The Ampex development team included Charles Ginsburg, Fred Pfost, Shelby Henderson, Ray Dolby, Alex Maxey, and Charles E. Anderson.

The first known video tape broadcast was on November 30, 1956, when CBS's Television City in Hollywood re-broadcast the news program "Douglas Edwards and the News," delayed for the West Coast. By 1957 NBC and ABC were also using the Mark IV Ampex video recorder, a machine the size of a desk.

(Note: Ray Dolby was a 19-year-old engineering student at the time who became part of the project, was drafted into the army, and returned to the project after his service. He would later create the Dolby sound technology found in recorders, players, movie theaters and more.)

Tape Formats - a revolution from film

Tape cameras are (almost) no longer being produced. Tape cameras are still valuable as a means of reading archive tapes, but they are also vulnerable to not working the next time you need to read old tapes. Hold on to your old, still-working machines. Don't use them for new shooting, but run them from time to time to make sure they don't stop working on you (for example by setting permanent indentations in the rubber rollers).

The professional formats are the worst off because they have changed so rapidly over the last 35+ years. The amateur and prosumer formats are the most usable in terms of the sheer number of machines still around. The irony is that the professional formats have better color depth and image quality overall, but because they were never standardized the way the consumer and prosumer formats were, the better professional recordings are often unusable or otherwise lost to obsolescence.

Format - Analog/Digital - Resolution - Time - Description
The Analog Formats
VHS Analog 250 lines 120 mins Video Home System (1976)
VHS-C Analog 250 lines 40 mins The "C" stands for compact. The cassette is smaller than regular VHS but uses the same tape. It plays in regular VHS machines with a special adapter.
8mm Analog 270 lines 120 mins 1983

Betacam Analog 300 lines varies (1982) 1/2-inch tape in cassettes for pro use. Records component signals (separate red, green, blue) for broadcast quality in analog format.

Betacam SP Analog 340 lines varies Betacam SP (Superior Performance), introduced in 1986, increased resolution to 340 lines. Was the TV-station standard into the late 90's.

MII Analog 340 lines varies 1985 - introduced as a competitor to Betacam. Splits the signal into red, green and blue (component signals).
S-VHS Analog 400 lines 160 mins The "S" stands for "Super" - These are easy to confuse with regular VHS tapes. They are the same size but need a special S-VHS machine to play. “S-video” separates the chrominance and luminance signals (color and brightness).
Hi-8 Analog 400 lines 120 mins 1989 - Notice that this small cassette has the same resolution as S-VHS. Prices on these are now (early 2004) running about $300. When Hi-8 cameras first came out in the early 90's they were commonly $2,000 to $2,500.
The digital formats
Digital-8 (D-8) Digital 500 lines 60 mins

Uses the same size cassette as 8mm and Hi-8 and can playback both 8mm and Hi-8. For recording this format shares cassettes with Hi-8.

This makes Digital-8 cameras backward compatible (if you own 8mm and Hi-8 tapes) and makes them a way to capture 8mm and Hi-8 tapes directly to a computer, or to re-record them to digital tape on another camera.

For persons who already have a lot of 8mm or Hi-8 tapes, a Digital-8 camera is often a good, low-cost move into digital. At this writing Digital-8 cameras start at about $400.

DV Digital 500 lines 60 mins 1996 - the first digital format available to consumers. Nearly lossless picture.
MiniDV Digital 500 lines 60 mins See DV above.
MicroMV Digital (MPEG-2) 530 lines 60 mins Obsolete: MPEG-2 data recorded on a very small cassette; no longer available.
DVCAM Digital 530 lines varies 1996 - Includes a memory chip in cassette allowing logging (shot sheet) types of information to be recorded.

Tape or Tapeless

HDV is an in-between format originally designed to place HiDef video on existing standard-def tapes. The camera acquires images in full HiDef but records them to standard-size tapes. The same format can also be recorded to CompactFlash, SDHC, SDXC (or other) memory cards via recording units attached directly or through the FireWire port. HDV is a format which is edging out of the scene; one of the first steps has been integrating recording units into existing HDV cameras in place of tape drives.

HDV Digital 1080 lines 60 mins Developed by JVC and Sony who were joined by Canon and Sharp to form the HDV consortium (2003). Usually stored only to tape but some cameras and digital recorders allow tapeless or tape+tapeless storage.
HDV1 acquires the image at 1280x720 pixels
HDV2 acquires the image at 1920x1080 pixels
Creates "m2t" files during capture

Tapeless - a revolution from tape

Everything from now on has some version of tapeless recording. Hard drive recorders have been around for some time, memory cards for a little less time. Some cameras have fixed internal memory, others have removable card slots, and still others have both options.

Tapeless recording lifts a huge weight for those who formerly used tape. Tape captures at the same speed as the original event, so an hour's recording takes an hour's capture, during which you need to keep a constant eye on the capture process. With files there is simply the speed of copying from the camera drive, or more directly from the memory cards. That not only lets you set up the file copy and let it go, it also copies the footage in a fraction of the time it took to record. It is also much more reliable than tape, whose problems with the physical condition of the tape and the read head can range from dropouts during capture to tangles, stretched tape, or tape losing its coating with age or use.
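A quick arithmetic sketch of the difference, assuming DV's roughly 13 GB per hour and an assumed 90 MB/s card-reader copy speed (plug in your own numbers):

```python
# Comparison sketch. 13 GB/hour is the rough size of an hour of DV footage;
# 90 MB/s is only an assumed card-reader / USB copy speed.

gigabytes_per_hour = 13          # approximate size of one hour of DV video
transfer_mb_per_sec = 90         # assumed copy speed from card or camera drive

tape_capture_minutes = 60        # tape capture runs in real time
file_copy_minutes = (gigabytes_per_hour * 1024) / transfer_mb_per_sec / 60

print(f"Tape capture: {tape_capture_minutes} min (and you have to watch it)")
print(f"File copy:    {file_copy_minutes:.1f} min (set it going and walk away)")
```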

The main problems with tapeless formats have to do with file archiving on various media, and usable (readable) file formats by editors.

XDCam Digital 1080 lines varies based on amount of storage capacity and on compression quality settings 2003 - From Sony (to market at the start of 2004). Originally this was a recording made to a dust-sealed 12cm optical disc. The format was retained for professional tapeless video on SxS (say: "S by S") cards. In this it is similar to Panasonic's P2 cards, except that SxS cards are designed for the newer, smaller ExpressCard form.
P2 Digital 1080 lines 2004 - Panasonic. The main rival to XDCam. Very costly memory cards (P2 cards).
AVCHD Digital 1080 lines varies based on amount of storage capacity and on compression quality settings

AVCHD (Advanced Video Coding High Definition) records and plays back high-definition video. From 2006, developed by Sony and Panasonic. The format at first had various artifact problems and could put an extra load on video editors, requiring a lot more processing. It is now a professional format using coding designated as MPEG-4 AVC/H.264 (or just AVC).

See: MPEG-4 AVC/H.264

A few file formats

Unlike JPEG for stills, there is no real imaging standard for video. Arguably the closest is JPEG 2000, but it is seldom used by anyone. That doesn't mean there are no generally recognized formats, just that there is no single format agreed to by the various competing camera makers. Instead, the camera makers have, from the start, introduced a bewildering array of formats, some of which are compatible with popular video editors and some of which are not. These are competing and largely proprietary.

Storage of video files on disk is always problematic. Hard drives are known to fail and keeping an archive requires periodic inspection and transfer from old disks with old disk formats to new disks.

  NOTE: AVI, MOV, MXF and some others are "container" files. That means the file extension designates a file which can hold video content in various formats. An editor's ability to read and interpret those formats depends on "codecs": software carrying the specifications to read and write the various video formats.
AVI Audio Video Interleave (also Audio Video Interleaved), known by its acronym AVI, is a multimedia container format introduced by Microsoft in November 1992 as part of its Video for Windows technology. AVI files can contain both audio and video data in a file container that allows synchronous audio-with-video playback. Like the DVD video format, AVI files support multiple streaming audio and video, although these features are seldom used. Most AVI files also use the file format extensions developed by the Matrox OpenDML group in February 1996. These files are supported by Microsoft, and are unofficially called "AVI 2.0".
MOV

The QuickTime File Format (QTFF) is designed to accommodate the many kinds of data that need to be stored in order to work with digital multimedia. The QTFF is an ideal format for the exchange of digital media between devices, applications, and operating systems because it can be used to describe almost any media structure.

The file format is object-oriented, consisting of a flexible collection of objects that is easily parsed and easily expanded. Unknown objects can simply be ignored or skipped, allowing considerable forward compatibility as new object types are introduced. QuickTime was introduced December 2, 1991 as a multimedia add-on to the Mac's System 6.

Version 2.0 came out in 1994, with the Mac version in February and a Windows version in November. The MOV format inspired Premiere's original programmers to design a more advanced editor called "Key Grip" (at Macromedia), which was eventually renamed "Final Cut" in order to sell it. It was bought by Apple (under Steve Jobs) in 1998 and brought out in 1999 as Final Cut Pro.

MTS filename extension - can have various contents using various codecs
M2T filename extension - can have various contents using various codecs
MXF filename extension - can have various contents using various codecs
MP4 filename extension - can have various contents using various codecs

 

Video Imaging Standards
Motion JPEG 2000 Filename extensions are .mj2 and .mjp2 - seldom used. Scalable wavelet compression builds on multiple layers of resolution and encodes each frame individually.
H.261 Nov 1988 - ITU-T video coding standard designed for transmission over ISDN lines. Frame sizes: 352x288 and 176x144 with 4:2:0 sampling.

MPEG-1 Nov 1992 - Various filename extensions: .mpg, .mpeg, .mp1, .mp2, .mp3, .m1v, .m1a, .m2a, .mpa, .mpv - a "lossy" compression scheme to compress VHS-quality video and CD audio down to about 1.5 Mbps, used to make Video CDs and for digital audio broadcasting (the MP3 audio format is contained within this standard).
MPEG-2 (H.262) - 1995 - can have various filename extensions - can compress video by 15-30 times. Like MPEG-1 but with support for interlaced video. Not good at low bit rates; it is better than MPEG-1 at bit rates higher than 3 Mbps. Added a 4:2:2 profile in 1996.

MPEG-4 (H.264) 1998 - based on the QuickTime file format - can have various filename extensions. In addition to MPEG-1 and MPEG-2 features it adds 3-D support with VRML. Used in codecs such as DivX, Xvid, Nero Digital AVC, 3ivx, MPEG-4 Part 10, QuickTime (MOV), and various HiDef media: Blu-ray Disc, MP4.

H.264 is used for Blu-Ray (it is one of the Blu-Ray codecs). Also used for web videos from Vimeo, YouTube, iTunes, Adobe Flash Player, direct-broadcast satellite TV, cable TV, and real-time videoconferencing.

AVCHD (also AVCHD 3D, AVCHD Lite, AVCCAM, NXCAM) 2006 - developed by Sony and Panasonic together. Can have various filename extensions. Supports both SD (standard def) and HiDef, including 720 and 1080 at 50 or 60 frames per second in both progressive and interlaced forms, as well as stereoscopic video (AVCHD 3D).

Can be used directly as files on Blu-ray discs or DVDs

HDV 2003 - Developed by JVC and supported by Sony, Canon and Sharp as a consortium.
Related to XDCAM.
REDCODE Proprietary ultra-high-resolution format used by the Red One, producing both 4:4:4 and 4:2:2 files. A multimedia audio/video file format owned by Red Digital Cinema Camera Company. Video compression is lossy; audio compression is lossless. Saves Bayer-matrix data rather than RGB data. File extensions: rdm, rdc, r3d, mov, rsx.

 

Web Video Standards
MP4 ISOmedia
H.264 MPEG standard, controlled and licensed by MPEG-LA
Matroska extensible, open source, open standard Multimedia container. MKV (matroska video), MKA (matroska audio) MKS files (subtitles), MK3D (stereoscopic/3D video). Basis for .webm (WebM) files.
WebM Google's rebranding of Matroska (mkv) - webmproject.org
VP8 Open license video standard - close to H.264 in quality - MPEG-LA decided to put out a call for any manufacturer who thinks VP8 might be infringing on one of their patents
OGG Theora  

The often-touted HTML-5 video and audio tags are not really very useful yet despite years of touting the effort, mainly because of patent challenges and a confusing welter of player add-ins. The idea behind having video and audio tags in HTML-5 is to allow video without having to license its usage for playback on the web. But at this point even the parameters and controls for the tags are not being developed, even though there are ready models in the current QuickTime, Windows Media and RealMedia player metafile specs.

HTML5 Video Formats and Browser Support

Currently, there are 3 supported video formats for the <video> element: MP4, WebM, and Ogg:
http://www.w3schools.com/html5/html5_video.asp

Browser MP4 WebM Ogg
Internet Explorer 9 YES NO NO
Firefox 4.0 NO YES YES
Google Chrome 6 YES YES YES
Apple Safari 5 YES NO NO
Opera 10.6 NO YES YES
  • MP4 = MPEG 4 files with H264 video codec and AAC audio codec
  • WebM = WebM files with VP8 video codec and Vorbis audio codec
  • Ogg = Ogg files with Theora video codec and Vorbis audio codec

 

Video Format Standards - Not Really that standard

When all parties get together to come up with an engineering standard, it makes life easier for anyone working on equipment or software using that standard. This is the case whether we are talking about RS-232C ports, USB ports, or JPEG photo files. Work on making JPEG a standard began in 1986 and the international imaging standard for JPEG was formally issued in 1992. Although there have been changes and modifications since, any JPEG can be opened by any software using the JPEG standards.

It is often assumed that TIFF (a photo file format) is also a standard, but it was never an imaging standard. It was invented by Aldus as a company format which they promoted, and it became an Adobe format when Adobe purchased Aldus. Because it is not an agreed-on standard in the same manner as JPEG, several variations in the file format were introduced by other authors, including bit order (most-significant first vs. last) and other items. But at least most folks adopted the original Aldus specs, so the format did not end up with a huge number of variants. It therefore often seems like a standard, and in practical terms it pretty much is.

In video, this has not been the case. Aside from JPEG 2000's video version, Motion JPEG 2000, which uses a different compression scheme from JPEG and encodes each and every frame, videographers are confronted with formats which are a product of market competition, not workflow cooperation. Editing programs for still photos, such as GIMP, Photoshop, etcetera, are not thrown by the camera used to shoot the picture: still photo files are standard file types which any camera will give you. Video files are not so democratic. Video editing programs may not be able to handle files from different cameras, even files with the same file extension, or even files from expensive big-brand-name makers.

Despite various suggestions that Motion JPEG 2000 should be used as a video standard, its adoption has been somewhere between non-existent and rare, sometimes showing up in proposals and then disappearing. One of the reasons is market competition, with each company staying with proprietary formats, though why this should be needed in video when it isn't needed in stills is a question which usually goes unanswered.

Another reason Motion JPEG 2000 has not been adopted is the need for processing power. Regular JPEG is compressed using "discrete cosine" transforms, while JPEG 2000 is a "wavelet" transform process which requires multiple passes and therefore a lot more horsepower. Although it is considered slightly superior as a compression mechanism and far superior in scalable compression capability, the extra processing has kept it out of even still cameras, which have far fewer frames per second to deal with. Using it for video would mean more power draw, faster onboard image processors and much more storage space, because Motion JPEG encodes each and every frame individually. Regular video compression uses both temporal and spatial compression, in which compression may be spread across more than one frame: only those sections of the frame which change are saved, when they change, to be reconstructed later in the editor.
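Here is a toy sketch of that temporal-compression idea, not any real codec: compare each frame with the previous one in fixed-size blocks and store only the blocks that changed.

```python
# Toy illustration of temporal compression (not any real codec): instead of
# storing every frame whole, compare each frame to the previous one in
# fixed-size blocks and keep only the blocks that changed.

def changed_blocks(prev_frame, next_frame, block_size=8):
    """Return {(block_row, block_col): block} for blocks that differ.

    Frames are 2-D lists of pixel values. Unchanged blocks are not stored.
    """
    height, width = len(prev_frame), len(prev_frame[0])
    deltas = {}
    for by in range(0, height, block_size):
        for bx in range(0, width, block_size):
            prev_block = [row[bx:bx + block_size] for row in prev_frame[by:by + block_size]]
            next_block = [row[bx:bx + block_size] for row in next_frame[by:by + block_size]]
            if prev_block != next_block:
                deltas[(by // block_size, bx // block_size)] = next_block
    return deltas

# A static 16x16 gray frame, then the same frame with one corner brightened:
frame_a = [[128] * 16 for _ in range(16)]
frame_b = [row[:] for row in frame_a]
frame_b[0][0] = 255

print(len(changed_blocks(frame_a, frame_b)))  # 1: only one 8x8 block stored
```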

To illustrate the difficulty of handling multiple formats and variations on formats, here are a couple of quotes from Apple's web support page for iMovie '09. I've included a link to the page because it is far too much material to show directly. When you go to the page you will see a set of tables with a very long list of video cameras, whether each listed camera is supported by iMovie '09, and what other problems may be encountered when using it.

This kind of thing is similar to the support problems still photo editors have with RAW formats. It is all too common to be unable to open a RAW file from the previous model of the same brand of camera. It also requires a large number of updates and modifications to RAW-reading software. That means consistency is lost. A RAW file opened today may look very different from the way it looked when edited a year ago or more. That is why it is a good idea to save any RAW edits in both JPEG and TIFF formats for archive needs. Unlike a JPEG file which will have a consistent look from year to year RAW can change or can even be impossible to open, and not because the file was corrupted but because the file is no longer supported.

This is also the same kind of thing we dealt with in the early 1980s with printer driver support. If you were on the purchasing end of office software you needed to make sure each program on your computer had a printer driver for the printer you owned. So, 30 years later, we have the same problem, and those too young to remember think this is acceptable and something to just work through. We thought that 30 years ago too.

If you bought a new printer in the early to mid 1980s you had to make sure it would work with all your software. For that matter, if you bought a program you had to take special care that it worked with your version of DOS. It wasn't Windows or Mac then; it was PC-DOS (IBM), MS-DOS, TRSDOS (Tandy/Radio Shack), and Commodore, all of them in numerous flavors which were not compatible. That was the general, more affordable market; there were a lot of other operating systems for computers both large and small.

If you worked in software then, as I did, it was exhausting and expensive to get hold of enough printers and printer manuals to be reasonably compatible with a wide variety of machines. It was common to have configuration utilities which allowed you to write your own printer drivers, because you could easily be on your own. It was also common to wire (solder) your own connectors. I had a full book of pin-out schematics for RS-232C (serial standard) printers to go from X-brand computer to Y-brand printer, and believe me I used that book. I was very relieved when the first serial-to-parallel adapters came out and I could buy Centronics parallel printers for any computer. For a mere $70 per printer I could throw away the soldering pencil.

Apple handled the problem by doing what a company IS department would do to control the machines under its care: Apple closely limited the number of devices it made and refused to license its architecture as open architecture to anyone else. That meant it only had to support a relative handful of devices, making the job much easier. Essentially Apple kept everything close to the vest and, in particular, made itself an end-product company with very high-priced equipment. That kept its share of the market very low, because of cost and because it didn't really create new enterprise software for business, such as high-end accounting or database applications. For example, Apple refused to use Ethernet for many years, preferring its home-grown and very slow AppleTalk networking.

Microsoft handled the problem first in DOS by standardizing all the versions of MS-DOS for all the companies making computers to which it licensed its DOS (disk operating system). Microsoft was not an end-product company; it made it possible for others to be end-product companies. Licensing the operating system meant thousands of large and small companies (including mom-and-pops) could get into the game, which brought prices down massively as competition increased, but it also meant non-standard equipment was invented to get beyond some current limit, and that new gear had to be supported: always a catch-up job. You still needed printer drivers and so forth from each software maker, but things got a whole lot easier. When it brought out Windows, Microsoft did the legwork of assuring that everyone had drivers to make printers and other devices work with their software.

Tapeless Camcorder Support (quoted from Apple site) http://support.apple.com/kb/ht3290 - link to full page

iMovie '09 works with many tapeless camcorders that record to flash memory, a hard disk drive (HDD), or DVD media. These devices use a USB 2.0 cable and include camcorders using MPEG-2 (standard definition), MPEG-4, AVCHD, and H.264 formats. If you use the AVCHD format you will need a Mac with an Intel-based Core Duo processor or better.

Apple has tested the products listed in this table. iMovie may not work with similar camcorders. Compatible iMovie features include device recognition when connected to a Mac, display of import controls for the device, and importing video to iMovie.

Due to the wide range of media types, and the different ways manufacturers store video on these media types, not all tapeless camcorders are compatible with iMovie '09.

In other words, Apple is saying that there is no real standard that everyone adheres to strongly, and that they will only support the cameras they have had a chance to examine. This is also the case with other video editor makers.

  • How useful any video file type is to you depends on a combination of the following (a small inspection sketch follows this list):
    • File type
      • indicated by the file extension such as MP4, or m2t or mpg, etcetera
    • Codec used for that type
      • there is often more than one codec for the file type
      • CODEC is short for coder/decoder (enCOde / DECode)
        • encoding creates a file of a particular type
        • decoding reads a file of a particular type
      • A codec is a specification for how a file stores data and for how to encode as well as decode the file
    • The software written to make use of the codec
      • This is written by various companies with various abilities and qualities. So in addition to picking a codec you need to pick the best possible piece of software to encode or decode your video file. This is usually a licensed plug-in used by your video editor.
    • Various quality settings during the encoding, usually set when rendering, such as data rate (in megabits per second), or physical format such as 16:9 or 4:3 ratio, or letterbox or not, video quality settings, interlaced or progressive and on and on.
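As a practical starting point, here is a minimal inspection sketch using the ffprobe command-line tool (part of FFmpeg, assumed to be installed); the filename is just a placeholder. It reports the container plus the codec of each stream, which is usually the first question to answer when an editor refuses to open a file.

```python
# Minimal sketch: ask ffprobe (assumed installed) what container and codecs
# a video file actually uses. "clip.mts" is a placeholder filename.

import json
import subprocess

def describe(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    info = json.loads(result.stdout)
    print("Container:", info["format"]["format_name"])
    for stream in info["streams"]:
        print(stream["codec_type"], "codec:", stream["codec_name"])

describe("clip.mts")
```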

For our purposes (and budgets) Digital-8, Mini-DV, HDV and AVCHD are the most important. They are how we can reach pro-quality results on a beer budget. AVCHD (Advanced Video Coding High Definition) is the latest popular format. Originally found on consumer cameras, it is now finding its way into prosumer cameras. The earlier versions of this format had motion and other artifacts (distortions in the image caused by the method used to make the image). Panasonic brought out the HMC-150 in October 2008, saving all video to the very small SDHC memory cards as AVCHD. This is widely considered the first pro/semi-pro use of the AVCHD format.

HDV is stored/recorded on the same miniDV tape used in standard-def video cameras. Even though the shape of the video has a 16:9 ratio, HDV manages to store the 33-percent-wider format as a special case of the 4:3 ratio. Even when the camera records full 1920x1080-pixel images (16:9 ratio), it stores them on the tape as 1440x1080 (4:3 ratio). Then on editing and on playback this is expanded horizontally by one-third.

HDV (HiDef video camera): horizontal image compression from 16:9 to 4:3 and back.
1 - Acquires the image and compresses it horizontally to 75 percent.
2 - Reconstitutes the image, expanding it to 133 percent.

Widescreen film (a film camera such as a Panavision camera): anamorphic horizontal compression from 2:1 to 1:1 and back.
(Note: 2:1 is not the only width-to-height ratio, just a common one and easy to show as an example.)
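The arithmetic behind the HDV squeeze is simple enough to check directly; the numbers below are the 1920/1440 figures from the paragraph above.

```python
# The arithmetic behind the HDV squeeze: a 1920-pixel-wide 16:9 image is
# stored as 1440 pixels (75 percent) and stretched back by 4/3 (133 percent)
# on playback.

acquired_width = 1920
stored_width = 1440

squeeze = stored_width / acquired_width    # 0.75
stretch = acquired_width / stored_width    # 1.333..., the playback stretch

print(f"Stored at {squeeze:.0%} of acquired width")      # 75%
print(f"Played back at {stretch:.0%} of stored width")    # 133%
print(f"Stored shape: {stored_width}x1080 = {stored_width / 1080:.2f}:1 (4:3)")
```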

This is very similar to film cameras' use of anamorphic lenses, such as on Panavision cameras, starting in the 1950's when the movies came up with the wide format to compete with the introduction of television, which was using the long-established 4:3 ratio screen. Paramount invented the format in order to offer a panoramic view which they knew television could not. Television's shape was set in engineering standards and could not be changed easily (note how long it is still taking television to change from 4:3 analog to 16:9 digital widescreen), while film projectors could be fitted with different apertures, leaving the rest of the machine as is.

Panavision used an anamorphic lens to compress the picture horizontally on the film. The projector, then, had a corresponding anamorphic projection lens to spread the film image back to the original taking format on the screen.

The technical methods are different but the concept is the same. Each was a way to utilize existing equipment or technology for an expanded horizontal size, creating the wide screen.
1 - Film cameras were able to use the same film stock with the same 4:3 ratio apertures for the image on film.
2 - Video cameras were able to use the same miniDV tape used to store the older 4:3 ratio pictures.

History Note: The movie "Shane" was the first movie to be projected to the viewing audience in a widescreen format but it was not actually shot in widescreen. "Shane" was shot in 1951 and not released until 1953 because of a long editing process. In the meantime Paramount decided to compete with television by inventing a widescreen format.

Although Shane was originally shot in the standard 4:3 ratio (actually the 1.37:1 Academy ratio), it was mostly composed of long and medium shots, making it easier for the studio to crop the frames to a 1.66:1 ratio for the release. They combined a new aperture plate for the projectors with a wide-angle lens to get the panoramic look, and decided to shoot all their films from then on in a 1.66:1 ratio. The next year, 1954, they widened that ratio to 1.85:1.

Anamorphic lenses are not the only method used in film to get an image with a wider ratio. Changing the shape of the film gate for any given film size is one way; another way also increases the film size (e.g. going from 35mm to 70mm). In the end, the anamorphic-lens approach has remained in use for more than half a century now. Probably not so for HDV.

For video, the 1920-pixel record, 1440-pixel storage, 1920-pixel display method is a bridge from standard def to HiDef. But the new crop of cameras is already not only shooting native 1920x1080 but also storing it on either hard drives or removable solid-state media. That appears to be the new future, equivalent to the introduction of miniDV tape in the 90's.

Delivery of the full HD product is another matter. Blu-Ray appears to be too late, too expensive, and behind the curve as a distribution medium. Downloads (for commercial products) and direct play from memory cards are rapidly moving the marketplace. This would probably have been with us by 2000 had the telephone companies been willing to put in real broadband, as the rest of the world has.

Still, the manufacturers seem to be using their size and money to push for Blu-Ray by hugely overpricing DVDs, probably to push people into spending for Blu-Ray players instead of staying with DVD players that upscale to 1080. We will see how that goes, but streaming seems to be gaining ground, starting with portable devices from phones to tablets. 2011 should see a large change in the direction of streaming (rather than downloading a copy and playing it); see the note which follows.

Blu-Ray vs streaming note, 1 Jan 2012: At the end of 2011 and the start of 2012, it would appear that the vested interests who really want to make money by reselling video on physical media are holding on. The Blu-Ray offerings have increased, as have the Blu-Ray players, although the players have barely decreased in price. However, large-screen televisions are vastly less expensive. A 40-inch screen in August 2011 was typically going for $1,000 to $1,300; at the end of the year I was seeing 40-inch screens at Sears for as little as $500 regular retail (not specials). That makes a hardware player more palatable.

Delivery of video products is not, however, on Blu-Ray. Unless specifically requested, clients prefer two other methods: traditional DVD and files on USB hard drives. Although OEM Blu-Ray burners for internal (inside the case) installation are available at the $100 price level, they take a very long time to render and burn each Blu-Ray disc, and so far duplicators are not practical for small-scale production. Instead each Blu-Ray disc has to be burned from the master file using the original computer, and that can take a very long time per disc, up to half the running time of the video.

In addition, the streaming world has not gotten any faster. So, although televisions as well as players are now designed with WiFi connections, Ethernet connectors, or both, the actual download speed is a major hindrance to full HiDef without interruptions. And far from opening up the pipeline to give US citizens the kind of bandwidth other countries have enjoyed for some years now, the telecoms and media companies seem prepared to throttle the lines to keep competitors and potential competitors from using their lines for streaming video and other media content before they do it themselves.

 

Analog, Digital, Widescreen and HiDef

We need to clear up a certain amount of misunderstanding. These four words are used together, especially the last three, but their meanings are separate.

Analog - Type of transmission and type of storage - the strength of the signal corresponds to the intensity being recorded
Digital - Type of transmission and type of storage - the signal is a recording of measurements of what is being recorded
Widescreen - Description of the picture format as much wider than the traditional 4:3 ratio (in film this is the 1.37:1 Academy ratio)
HiDef - Label for the amount of detail in a video picture (the resolution of the image) - can be either analog or digital

To get HiDef either analog or digital will work, although all you see today is digital; HiDef analog equipment was available some years ago.

Widescreen only describes the appearance of the rectangle containing the picture and has nothing to do with resolution (SD or HD) or with whether the content is stored in digital or analog formats.

Analog and Digital tell us both how the images are transmitted and how they are stored. Analog pictures can be transmitted as digital information by converting the analog intensities into digital measurements at which point the signal becomes digital.

Analog stores information by varying the amount of signal (or the amount of frequency change - FM) according to the amount of what it is storing - the amount of color, the amount of brightness, the amount of volume and so forth. Changes in the strength of an analog signal or the frequency shift (FM) are analogous to whatever is being recorded.

Digital stores information as sampled measurements of the content. The signal itself remains at a constant strength, changing only by pulsing on and off in patterns which are the data, much like Morse code. Each measurement is a "sample." The more samples per second, the more accurate the recording.

Again, instead of changing the intensity or the frequency of the signal, a digital recording stores the measurement of the signal's intensity. For example:
* instead of sending a low-intensity signal for a low-volume sound, the digital data is a low number representing that volume.
* when the volume is high, the digital data is a higher number.
* the information about each measurement is called a sample.
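A toy sketch of that sampling idea, with an arbitrary sample rate and bit depth chosen purely for illustration (not any broadcast spec):

```python
# Toy sketch of sampling: an "analog" signal is a smoothly varying level;
# a digital recording is just a list of measured numbers taken at regular
# intervals. Sample rate and bit depth here are illustration values only.

import math

def sample(signal, seconds, sample_rate, bits):
    """Measure signal(t) sample_rate times per second as integer levels."""
    levels = 2 ** bits - 1
    samples = []
    for n in range(int(seconds * sample_rate)):
        t = n / sample_rate
        value = signal(t)                       # analog level between 0.0 and 1.0
        samples.append(round(value * levels))   # quantized measurement (a "sample")
    return samples

# A tone that swings between quiet and loud: low levels become low numbers,
# high levels become high numbers.
tone = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * 5 * t)
print(sample(tone, seconds=0.2, sample_rate=40, bits=8))
```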


Review Quiz

The first practical video tape recorder was introduced in 1956 by Ampex after more than four years of work.

The first known video tape broadcast was November 30, ________.

Even though the shape of the video has a 16:9 ratio, ________ manages to store the 33-percent wider format as a special case of the 4:3 ratio.

Wide-screen formats started in the 1950's when movies invented the formats to compete with the introduction of ________, which was still using the long established 4:3 ratio screen.

Panavision used an ________ lens to compress the picture horizontally on the film. The projector, then, had a corresponding anamorphic projection lens to spread the film image back to the original format on the screen.

Analog and Digital refer to different types of ________, to the method of creation and the type of storage media.

Widescreen is a description of the picture ________.

HiDef and SD (Standard Def) are general labels for the amount of detail in a video picture - the ________ of the image.


Film, Video and Frames Per Second

I know it is like the revealed gospel among all too many video people who think going to 24 fps will make your video into cinema, but, a few thoughts:

  • No one thought in terms of frames per second.
  • The only measure of speed was in feet per second.

In 1927 the competition was to be first with a commercial sound-film technology. Warner was developing Vitaphone, film synched with records for sound. Fox was developing Movietone, sound on film: film printed with an optical sound track along the side. At the time there was only one commercial film gauge, 35mm, and one frame ratio, 4 to 3. No one counted film in frames, only in feet per minute. In 1927 Stanley Watkins was chief engineer with Western Electric and working with Warner. He got together with Warner's chief projectionist to go over standard film speeds already in use. Silent film was shot at about 60 feet per minute, though it was projected at various speeds from 60 to 90 or 100+ feet per minute, because there was no sound to distort. The better houses ran films at the original speed while lesser houses ran them faster in order to get more showings per day. So Watkins decided to compromise at the round number of 90 feet per minute.

At that rate the number of frames per second came to 24, but no one needed to care about frames per second until working at other gauges, such as 16mm or 8mm. Watkins stated in 1961 that if they had really done it right they might have researched for six months or so and come up with a better rate. Fox was doing its own research and was considering 85 feet per minute. If Fox had won that race we would be running cinema at 22-2/3 frames per second instead of 24 fps (remember, they were thinking in feet, not in frames).
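The arithmetic is easy to check, since 35mm film carries 16 frames per foot:

```python
# The arithmetic behind the projection-speed story: 35mm film carries
# 16 frames per foot, so feet per minute converts directly to frames per second.

FRAMES_PER_FOOT_35MM = 16

def fps(feet_per_minute):
    return feet_per_minute * FRAMES_PER_FOOT_35MM / 60

print(fps(90))   # the Warner/Watkins compromise: 24.0 fps
print(fps(85))   # the rate Fox was considering: about 22.67 fps
print(fps(60))   # typical silent shooting speed: 16.0 fps
```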

By 1931 the sound-on-film system had replaced the sound-on-disc system because it was so much handier to fix when film broke, even if the audio was not quite as good. With sound on disc, if the film broke you had to replace the missing footage with the same number of blank frames; otherwise you would wind up out of sync. With the sound track on film, the most a break could cost you was a second's worth of out-of-sync sound. By 1931 the Fox system was designed to run at the same 90-feet-per-minute rate in order to fit in with the existing system established by Warner. So in essence pure chance and a casual compromise set the speed of film travel, and easier handling brought in the sound-on-film method.

One of the arguments I often heard, and, sorry to say, repeated in good faith, was that 24 fps was a cost compromise between bad sound at 16 fps (silent shooting speed) and something faster which would mean more film and so more cost. But what I, and seemingly everyone else, didn't pay attention to was that sound quality for the Warner system was independent of the film speed, because the sound came from a record which played at 33 1/3 rpm, another standard spec [on this page below], though not set until 1948. So, using the audio-on-disk system, the film's frame rate in terms of frames per second didn't matter.

There was never a grand aesthetic vision determining the frame rates. The original decision wasn't even about frames per second but about feet per minute. Note that we don't talk about "frameage" but we do talk about "footage" as a measure of shooting time, even for digital files which are not measured in feet. The "esthetics" of 24-frames-per-second "cinema" was an artifact of a practical engineering decision made in the moment. We have far better cameras today but insist on remaining in 1927. The people in 1927 were not trying to stay in that year; if they had had more advanced tech at the time they would have used it.

In a similarly practical way, the 25-frames-per-second television rate in Europe and the 30-frames-per-second rate in the US were decided for the most practical of reasons: they needed a standard clock to trigger frames on television cameras, and the line current (50 hertz in Europe and 60 hertz in America) provided a great timer to synchronize video frames. And, not exactly for esthetic reasons but because of the limitations of image retention on a cathode ray tube (the phosphors died out too soon), each frame was divided into two overlapping fields, in other words 50 fields per second in Europe and 60 fields per second in America.

Only in the last couple of decades, as video cameras and sensors have come to exceed what film can do, has the "cinema" term been used to differentiate "mere" video from the more snobbish "cinema," usually just video at 24 frames per second. Yet the cameras are more and more interchangeable, and the highest-level video ("cinema") cameras, always shooting at 24 frames per second of course, are used like social classes to claim top rank. Meanwhile many of the lowest-ranking video cameras are not only better than most of the old film cameras but are even used for cinema, such as the three smart phones used for the documentary "For Sama," in which a refugee couple with a baby "filmed" their own flight from death threats. It is cinematic regardless of frame rate or camera, won 59 international awards and had 40 other nominations at this writing (https://www.imdb.com/title/tt9617456/awards).

For that matter a lot of film, mostly for television, was shot at 30 frames per second. Two movies by Mike Todd, first "Oklahoma" and then "Around the World in 80 Days," were shot at two frame rates, 24 and 30. For "Oklahoma" each scene was shot twice, once at 24 and once at 30 fps. The next year, with "Around the World in 80 Days," he shot each scene once, with both cameras strapped together and running at the same time. Mike Todd wanted a steadier picture and better image resolution (note that those are both aesthetic reasons). He got steadier images and better resolution, but he didn't get theaters to change their equipment.

Cheers and Seinfeld are two shows shot on film at 30 fps that come quickly to mind; I haven't been able to find a full list of shows. If you are shooting film for television, which is broadcast at 30 fps, why would you shoot at a different rate from the target rate? If you shoot at 24 frames per second you will have to either repeat every 4th frame or run a pulldown scheme (such as 3:2 pulldown) to borrow fields from one frame and mix them with the adjacent frame to flesh out the number of frames needed. Pulldown methods have ghosting artifacts. You could repeat a frame, but then you have to stop the film for that repeated frame and also maintain a very loose sound loop in order to keep the sound running at a steady rate over the sound head for the audio transfer. The best way, which was used on many television shows, is to shoot at the end-usage target frame rate of 30 fps.

(Illustrations: frames created from 24 fps video using pulldown to generate extra frames for playback at 30 fps, where a field from one frame is joined with a field from the next frame, producing the "ghosting" artifact; compared with single frames of the same video shown at the 24 fps rate it was shot at.)

If your editor allows it, you can take a 24 fps file and duplicate every 4th frame for playback at 30 fps. You will have a hard time noticing the expected jerkiness. US television runs at 30 fps, so why shoot at a different frame rate?
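A sketch of that "repeat every 4th frame" idea, with frames represented as simple labels:

```python
# Sketch of the "repeat every 4th frame" idea: 24 source frames become
# 30 played frames by duplicating one frame in every group of four.

def repeat_every_fourth(frames):
    out = []
    for i, frame in enumerate(frames, start=1):
        out.append(frame)
        if i % 4 == 0:          # after every 4th source frame...
            out.append(frame)   # ...play it a second time
    return out

one_second_24 = [f"F{n}" for n in range(1, 25)]   # 24 source frames
one_second_30 = repeat_every_fourth(one_second_24)
print(len(one_second_30))   # 30
print(one_second_30[:6])    # ['F1', 'F2', 'F3', 'F4', 'F4', 'F5']
```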

To handle this task for a nominal 30 fps (actually 29.97 fps) in Vegas Pro, my preferred editor since 1999, you right-click on the clip ("event" in Vegas terminology) and choose "Properties" from the popup menu. Then in the "Video Event" tab of the dialog choose "Disable resample." Leave both the project and resample rates at 1.000.

  • 30 fps in Todd-AO resulted in less flicker and a smoother picture.
  • At 24 fps you get a bit more blur at comparable shutter speeds than at 30 fps, where you can use higher shutter speeds before "strobing" becomes a problem.
  • Remembering back to early film-club days (late 1990's to early 2000's), 24 fps was wanted because the film festivals wouldn't accept video as "real" movies, so everyone shot on video at 24 fps if they could (because of cost, and to avoid pulldown from 30 to 24) and printed their video to film to get their work into "film" festivals. A camera which could shoot 24 fps was a holy grail for a while because it made the print-to-film easier and avoided the pulldown (converting one frame rate to another; where film has just frames, video has two "fields" per frame, because of the old need to interleave fields across the frame, one with the odd-numbered raster lines and the other with the even-numbered raster lines).
  • Shooting film at 30 fps does not turn film into video, any more than shooting video at 24 fps turns video into film.
  • 25 fps was chosen for European video because the "clocking" mechanism for the cameras was dependent on the cycles per second of European mains power, 50 cps AC: each field runs at the power-line rate, so the frame rate is half of it.
  • 30 fps was chosen for US video for the same reason. Cameras used the mains power cycle rate to clock the frames/fields; in this country that is 60 cps AC. So we get half that as the frame rate, because each field is on the cycle rate (two fields per frame).
  • Film grain, or the lack thereof, is a factor in the look of the output.
  • Video's raster method used to show up as so many scan lines per screen; resolution is still rated the same way but is now delivered differently, by full-pixel sensors rather than the old imaging tubes.
  • Film's tonal range is a major factor in the look. Video supposedly doesn't have the same range. That isn't true, as a BBC white paper I ran into some years ago showed, but the shape of the tonal curve is different and has to be digitally manipulated with a calibrated look-up table.

Here is a quote from a Wikipedia article I looked up to check my memories about Todd-AO (the "AO" stood for American Optical).
Here is the URL: https://en.wikipedia.org/wiki/Todd-AO
Below is the passage copied and pasted from the page.

The original version of the Todd-AO process used a frame rate of 30 frames per second, faster than the 24 frames per second that was (and is) the standard. The difference does not seem great, but the sensitivity of the human eye to flickering declines steeply with frame rate and the small adjustment gave the film noticeably less flicker, and made it steadier and smoother than standard processes. The original system generated an image that was "almost twice as intense as any ever seen onscreen before, and so hot that the film has to be cooled as it passes through the Todd-AO projector".

Only the first two Todd-AO films, Oklahoma! and Around the World in Eighty Days, employed 30 frames per second photography. Because of the need for conventional versions at 24 frames per second, every scene of the former film was shot twice in succession: once in Todd-AO and once in 35 mm CinemaScope. The latter film was shot with two 65 mm Todd-AO cameras simultaneously, the speed of the second camera was 24 frames per second for wide release as optical reduction prints. All subsequent Todd-AO films were shot at 24 frames per second on a 65 mm negative and optically printed to 35 mm film as needed for standard distribution. In all, around 16 feature films were shot in Todd-AO.

Anyway, this is why I consider the 24 fps "cinema" idea to be an uninformed, persistent, insistent pretension, held to sincerely as if life itself will cease if its adherents give up on the 24 fps "cinema" frame rate, or at least lose the ability to claim they are doing "cinema." I get tired of it. Trying to argue the point is like trying to argue religious points: not much point.

So anyway, that is my rant.

Here is another reference about telecines, used to convert film to video:
https://en.wikipedia.org/wiki/Telecine

Below is an illustration I put together for a lesson in a course I was teaching on video. It compares film frames to video fields and from there to video frames (each with two fields). In a 3:2 pulldown (every 5 fields are split up as 3 fields from one film frame and 2 from the next), some video frames hold a single film image while others hold fields from two different film frames. That is the cause of the difference in appearance between the 24 fps video on its own and the 24 fps video in a 30 fps project, where you get the slight "ghosting" image giving it an odd shifting appearance. You can see the "ghosting" effect above in the two pictures on the left side of the table.

Another set of frame rates and pulldown patterns to use:

Source frame rate -> convert to -> pulldown pattern

Silent-era source rates (no conversion pattern listed):
16 fps - silent standard 8mm
18 fps - silent super 8mm
16 fps - silent 35mm (also sometimes 12 fps or lower)

16 fps (15.985) NTSC -> 30 fps (29.97): 3:4:4:4
16 fps PAL -> 25 fps: 3:3:3:3:3:3:3:4 (or run the film at 16.67 fps and use 3:3 pulldown)
18 fps (17.982) NTSC -> 30 fps: 3:3:4
20 fps (19.980) NTSC -> 30 fps: 3:3
27.5 fps NTSC -> 30 fps: 3:2:2:2:2
27.5 fps PAL -> 25 fps: (no pattern listed)

24 fps -> 96 fps: 4:4 (4x frame repeat)
24 fps -> 120 fps: 5:5 (5x frame repeat)
24 fps -> 120 fps: 6:4 (3:2 with 2x deinterlacing)
There are more patterns used to display 24 fps material from a DVD player or a stream on a progressive LCD or LED display such as a monitor or television.
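As a check on the patterns in the table above: a pulldown pattern simply lists how many interlaced fields each source frame is held for, so a few lines of Python can confirm that a pattern really lands on the target rate.

```python
# Check the pulldown patterns from the table above. The average number of
# fields per source frame times the source rate gives the output field rate;
# half of that is the output frame rate.

def output_rates(source_fps, pattern):
    fields = [int(n) for n in pattern.split(":")]
    fields_per_second = source_fps * sum(fields) / len(fields)
    return fields_per_second, fields_per_second / 2

print(output_rates(24, "3:2"))              # (60.0, 30.0)  classic 3:2 pulldown
print(output_rates(16, "3:4:4:4"))          # (60.0, 30.0)  from the table above
print(output_rates(16, "3:3:3:3:3:3:3:4"))  # (50.0, 25.0)  16 fps to PAL
```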


(Illustrations: 3-2 pulldown and 6-4 pulldown)

It was the massively lowered cost of video which allowed so many more people to produce movies. The idea that film was superior only lasted until video cameras started to get as good as or better than film cameras in terms of resolution and tonal scale control. Until then, film festivals considered video beneath them. Many videographers bought into that, thinking that film was more elevated than video, forgetting they were simply making stories with "moving" pictures, so they tried to create cinema with a change in frame rates.

(Instead of just telling the "film only" snobs what they could do with you know what. There was a funny Midsomer Murders episode the other night titled "Picture Perfect" in which a Brit camera club is snobbish and restrictive toward a group of digital photographers. I laughed like crazy because it was so typical of such clubs and societies, and because I had fun identifying the cameras and technologies of both types.)

This old snobbishness still shows up in designating video cameras with extended tonal controls and image detail and RAW files as "cinema" cameras, such as the RED line of cameras and the Black Magic cinema line. They don't use film. They remain video cameras.